App's Sales Funnel

Alexander Feldman V.1.2

This project investigates user behavior in the company's app.
The goals of the project:

  • Study the sales funnel.
  • Give recommendations on applying a new font to the app's design, based on the results of A/A/B testing.

1. Open the data file and read the general information.

In [1]:
#!pip install seaborn --upgrade
#!pip install plotly --upgrade
In [2]:
# import libraries
import math
import pandas as pd
import numpy as np
import scipy.stats as st
from matplotlib import pyplot as plt
import seaborn as sns
import plotly.express as px
from plotly import graph_objects as go
import plotly.figure_factory as ff
from plotly.subplots import make_subplots
In [3]:
# open the dataset
try:
    data = pd.read_csv('/datasets/logs_exp_us.csv', sep='\t') # path for working on the platform
except FileNotFoundError:
    data = pd.read_csv('datasets/logs_exp_us.csv', sep='\t') # path for working locally
In [4]:
data.head()
Out[4]:
EventName DeviceIDHash EventTimestamp ExpId
0 MainScreenAppear 4575588528974610257 1564029816 246
1 MainScreenAppear 7416695313311560658 1564053102 246
2 PaymentScreenSuccessful 3518123091307005509 1564054127 248
3 CartScreenAppear 3518123091307005509 1564054127 248
4 PaymentScreenSuccessful 6217807653094995999 1564055322 248
In [5]:
display(data.info(), data.describe())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 244126 entries, 0 to 244125
Data columns (total 4 columns):
 #   Column          Non-Null Count   Dtype 
---  ------          --------------   ----- 
 0   EventName       244126 non-null  object
 1   DeviceIDHash    244126 non-null  int64 
 2   EventTimestamp  244126 non-null  int64 
 3   ExpId           244126 non-null  int64 
dtypes: int64(3), object(1)
memory usage: 7.5+ MB
None
DeviceIDHash EventTimestamp ExpId
count 2.441260e+05 2.441260e+05 244126.000000
mean 4.627568e+18 1.564914e+09 247.022296
std 2.642425e+18 1.771343e+05 0.824434
min 6.888747e+15 1.564030e+09 246.000000
25% 2.372212e+18 1.564757e+09 246.000000
50% 4.623192e+18 1.564919e+09 247.000000
75% 6.932517e+18 1.565075e+09 248.000000
max 9.222603e+18 1.565213e+09 248.000000
In [6]:
print('The dataframe has', data.duplicated().sum(), 'duplicated rows')
The dataframe has 413 duplicated rows

No missing values. There are 413 duplicated rows, which we will drop. We should also correct the types of some columns.

2. Prepare the data for analysis.

  • Rename the columns in a way that's convenient for you.
  • Check for missing values and data types. Correct the data if needed.
  • Add a date and time column and a separate column for dates.
In [7]:
# rename columns
data.columns=['event', 'user_id', 'datetime', 'group']
In [8]:
# Handle duplicate rows
data = data.drop_duplicates()
In [9]:
# check that each user belongs to only one group
def users_in_both(g1, g2):
    users_g1 = set(data.loc[data['group'] == g1, 'user_id'])
    users_g2 = set(data.loc[data['group'] == g2, 'user_id'])
    return users_g1 & users_g2

print('Are there users in both groups 246 and 247?', len(users_in_both(246, 247)) > 0)
print('Are there users in both groups 246 and 248?', len(users_in_both(246, 248)) > 0)
print('Are there users in both groups 247 and 248?', len(users_in_both(247, 248)) > 0)
Are there users in both groups 246 and 247? False
Are there users in both groups 246 and 248? False
Are there users in both groups 247 and 248? False
In [10]:
# Add a date column and correct the column types

data['datetime'] = pd.to_datetime(data['datetime'], unit='s')
data['date'] = data['datetime'].dt.floor('D')
data['group'] = data['group'].astype('str')

The DataFrame is ready for analysis.

3. Study and check the data.

3.1. The number of users and events.

  • How many events are in the logs?
In [11]:
#group data
events_number = data.groupby(['event'], as_index=False)['user_id'].count()
events_number = events_number.rename(columns={'user_id':'count_events'})
In [12]:
# plot bar chart
fig = px.bar(events_number, x="event", y="count_events", text="count_events",
            color_discrete_sequence=px.colors.qualitative.Set3)
fig.update_traces(texttemplate='%{text}', textposition='auto')
fig.update_layout(yaxis=dict(title='Number of events'), xaxis=dict(title='Type of events'),
                 title={'text':'Number of events by type', 'x':0.5})
fig.show()

As you can see from the graph, there are 119101 MainScreen events. CartScreen and OffersScreen have 46808 and 42668 events respectively, PaymentScreen has 34118, and Tutorial only 1018.

  • How many users are in the logs?
In [13]:
users_number = data.groupby('group', as_index=False).agg({'user_id':'nunique'})
fig = px.pie(users_number, values='user_id', names='group', color_discrete_sequence=px.colors.qualitative.Set3)
fig.update_traces(textinfo='value + percent')
fig.update_layout(legend_title_text='Group', title={'text':'Number of unique users by group', 'x':0.5})
fig.show()

The groups are about the same size. The difference between the shares of groups is less than 0.7% of the total number of users.
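That balance can also be checked formally. The sketch below runs a chi-square goodness-of-fit test against an even three-way split; the counts are illustrative placeholders (in the notebook you would pass `users_number['user_id']` instead):

```python
# Sketch: test whether the three experiment groups are balanced in size.
# The counts below are hypothetical stand-ins for users_number['user_id'].
from scipy import stats

group_counts = [2484, 2513, 2537]  # hypothetical unique-user counts per group

# Chi-square goodness-of-fit against a uniform split across the 3 groups
chi2, p_value = stats.chisquare(group_counts)

print(f'chi2 = {chi2:.3f}, p-value = {p_value:.3f}')
if p_value < 0.05:
    print('Group sizes differ significantly')
else:
    print('No significant difference in group sizes')
```

A high p-value here would support treating the split as balanced before running the A/A/B comparisons.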

  • What's the average number of events per user?
In [14]:
event_per_user = data.groupby(['group','user_id'], as_index=False).agg({'event':'count'})
In [15]:
# make violin plot for three groups:
fig = px.violin(event_per_user, y="event", x="group", box=True, points="all", color='group',
               title='The distribution of the number of events by users by group',
               color_discrete_sequence=px.colors.qualitative.Set1)
fig.show()
In [16]:
print('The average number of events per user is {:.2f}'.format(event_per_user['event'].mean()))
print('The median number of events per user is {:.2f}'.format(event_per_user['event'].median()))
The average number of events per user is 32.28
The median number of events per user is 20.00

As we can see from the graph, there are some abnormal users with a huge number of events: these are outliers. Let's drop the outlier users who have more than 300 events.

In [17]:
abnormal_user_list = event_per_user[event_per_user['event'] > 300]['user_id']
data = data[~data['user_id'].isin(abnormal_user_list)].reset_index(drop=True)
users_number_2 = data.groupby('group', as_index=False).agg({'user_id':'nunique'})
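The 300-event cutoff above was chosen by eye from the violin plot. An alternative is to derive the threshold from the distribution itself, e.g. the 99th percentile. The sketch below uses synthetic lognormal counts as a stand-in for `event_per_user['event']`:

```python
# Sketch: derive an outlier cutoff from the distribution itself instead of a
# fixed 300-event threshold. Synthetic counts stand in for real data.
import numpy as np

rng = np.random.default_rng(42)
events_per_user = rng.lognormal(mean=3.0, sigma=0.8, size=5000)  # hypothetical

# Keep everyone at or below the 99th percentile of events per user
threshold = np.percentile(events_per_user, 99)
kept = events_per_user[events_per_user <= threshold]

print(f'99th-percentile cutoff: {threshold:.1f}')
print(f'Users kept: {len(kept)} of {len(events_per_user)}')
```

A quantile rule adapts automatically if the data is refreshed, at the cost of always trimming a fixed share of users.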

3.2. Date and time of events.

  • What period of time does the data cover?
  • Find the maximum and the minimum date.
In [18]:
print('The studied period of time is {} days: from {:%Y-%m-%d} to {:%Y-%m-%d}'.format(
    (data['date'].max() - data['date'].min()).days, data['date'].min(), data['date'].max()))
The studied period of time is 13 days: from 2019-07-25 to 2019-08-07
  • Plot a histogram by date and time.
In [19]:
# plot a histogram
fig = px.histogram(data, x="datetime", color_discrete_sequence=px.colors.qualitative.Set3)
fig.update_layout(title={'text':'Distribution of events by date and time', 'x':0.5})
fig.show()
  • Can you be sure that you have equally complete data for the entire period?
  • Older events could end up in some users' logs for technical reasons, and this could skew the overall picture.
  • Find the moment at which the data starts to be complete and ignore the earlier section.
  • What period does the data actually represent?

As we can see from the histogram, the data is complete only for the period from August 1, 2019 to August 7, 2019.
Such technical problems with the data should not prevent the analysis, provided that the events of some users are not mixed up with those of other users.
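The cutoff was read off the histogram; it could also be found programmatically from daily event counts, for instance as the first date whose volume reaches a given fraction of the median daily volume. A minimal sketch with made-up counts that mimic the ramp-up:

```python
# Sketch: pick the completeness cutoff from daily event counts rather than by
# eye. The counts below are invented to imitate a ramp-up period.
import pandas as pd

daily = pd.Series(
    [50, 120, 400, 900, 2100, 33000, 35500, 34800, 36100, 35200, 34600, 35900],
    index=pd.date_range('2019-07-27', periods=12, freq='D'),
)

# First date whose volume reaches at least half of the median daily volume
cutoff = daily[daily >= daily.median() * 0.5].index.min()
print('Data considered complete from:', cutoff.date())
```

In the notebook, `daily` would come from `data.groupby('date')['event'].count()`, and the 0.5 fraction is an assumption to be tuned.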

In [20]:
# drop the abnormal period
data = data[data['date'] >= '2019-08-01'].reset_index(drop=True)
In [21]:
print('The new period of time is {} days: from {:%Y-%m-%d} to {:%Y-%m-%d}'.format(
    (data['date'].max() - data['date'].min()).days, data['date'].min(), data['date'].max()))
The new period of time is 6 days: from 2019-08-01 to 2019-08-07

3.3. Excluding the older data.

  • Did you lose many events and users when excluding the older data?
In [22]:
# the change in the number of events
events_number_before = events_number.copy()  # event counts before filtering
events_number_after = data.groupby('event', as_index=False)['user_id'].count()
events_number_after = events_number_after.rename(columns={'user_id':'count_events'})
In [23]:
fig = go.Figure()
fig.add_trace(go.Bar(
    x=events_number_before['event'],
    y=events_number_before['count_events'],
    name='Before', text=events_number_before['count_events']
))
fig.add_trace(go.Bar(
    x=events_number_after['event'],
    y=events_number_after['count_events'],
    name='After', text=events_number_after['count_events']
))

fig.update_traces(texttemplate='%{text}', textposition='auto') 
fig.update_layout(yaxis=dict(title='Number of events'), xaxis=dict(title='Type of events'), 
                  title={'text':'Number of events before and after filtering', 'x':0.5})
fig.show()

As we can see from the graph, the number of events decreased for every event type. The biggest losses are for the CartScreen (-24.5%) and PaymentScreen (-28%) events.

In [24]:
# the change of number of users
users_number_3 = data.groupby('group', as_index=False).agg({'user_id':'nunique'})
fig = make_subplots(rows=1, cols=3, specs=[[{'type':'domain'}, {'type':'domain'}, {'type':'domain'}]])
fig.add_trace(go.Pie(labels=users_number['group'], values=users_number['user_id']),1,1)
fig.add_trace(go.Pie(labels=users_number_2['group'], values=users_number_2['user_id']),1,2)
fig.add_trace(go.Pie(labels=users_number_3['group'], values=users_number_3['user_id']),1,3)

fig.update_traces(hole=.1, textinfo='value + percent')
fig.update_layout(
    title_text="How many users did we lose during filtering?",
    annotations=[dict(text='Raw data', x=0.1, y=0.9, font_size=15, showarrow=False),
                 dict(text='Drop abnormal users', x=0.5, y=0.9, font_size=15, showarrow=False),
                 dict(text='Drop abnormal days', x=0.95, y=0.9, font_size=15, showarrow=False)])
fig.show()

The loss of users is less than 1%, and the proportions of the groups are almost unchanged.
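The sub-1% loss can be quantified directly from the before/after unique-user counts per group (`users_number` vs `users_number_3` in the notebook); the numbers below are illustrative placeholders:

```python
# Sketch: quantify user losses from filtering, per group.
# The before/after counts are hypothetical stand-ins.
before = {'246': 2489, '247': 2520, '248': 2542}  # hypothetical
after  = {'246': 2475, '247': 2508, '248': 2531}  # hypothetical

for group in before:
    lost = before[group] - after[group]
    share = lost / before[group] * 100
    print(f'Group {group}: lost {lost} users ({share:.2f}%)')
```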

4. Study the event funnel.

  • See what events are in the logs and their frequency of occurrence. Sort them by frequency.
In [25]:
#  Plot distributions of events
plt.figure(figsize=(18,6))
sns.kdeplot(data=data, x='datetime', hue='event', fill=True)
plt.title('The frequency of events by types', fontdict={'size':15})
plt.xlabel('Time')
plt.gca().spines["top"].set_alpha(0.0)    
plt.gca().spines["bottom"].set_alpha(0.3)
plt.gca().spines["right"].set_alpha(0.0)    
plt.gca().spines["left"].set_alpha(0.3)  
plt.show()

The MainScreen event has the highest frequency, followed by OffersScreen, CartScreen, and PaymentScreen; Tutorial has the lowest.
It is noteworthy that on weekdays the number of visits to the MainScreen is noticeably higher than on the weekend (Aug 3 and Aug 4).

  • Find the number of users who performed each of these actions. Sort the events by the number of users. Calculate the proportion of users who performed the action at least once.
In [26]:
# Plot graph
event_users = data.groupby('event', as_index=False).agg({'user_id':'nunique'})
event_users = event_users.rename(columns={'user_id':'n_users'})
event_users = event_users.sort_values('n_users', ascending=False)
fig = px.bar(event_users, x="event", y="n_users", text="n_users", color='event',
             color_discrete_sequence=px.colors.qualitative.Set3)
fig.update_traces(texttemplate='%{text}', textposition='outside')
fig.update_layout(yaxis=dict(title='Number of users'), xaxis=dict(title='Type of events'),
                 title={'text':'Number of users by events', 'x':0.5})
fig.show()
In [27]:
fig = px.pie(event_users, values='n_users', names='event',
             color_discrete_sequence=px.colors.qualitative.Set3)
fig.update_traces(textinfo='value + percent')
fig.update_layout(legend_title_text='Events', 
                  title={'text':' The proportion of users who performed the action at least once', 'x':0.5})
fig.show()

MainScreen accounts for almost 40% of these users (7387), OffersScreen for 23% (4561), CartScreen and PaymentScreen for 18.5% (3702) and 17.5% (3507) respectively, and Tutorial for only 4% (835).

  • In what order do you think the actions took place? Are all of them part of a single sequence? You don't need to take all of them into account when calculating the funnel.

The logic dictates that the order of events in the funnel should look like this:

  1. MainScreenAppear
  2. OffersScreenAppear
  3. CartScreenAppear
  4. PaymentScreenSuccessful

Tutorial does not participate in the funnel because users can navigate to Tutorial from any stage.

Let's check the order in which users pass through the funnel. Are there any abnormal paths?

In [28]:
stage = ['MainScreenAppear', 'OffersScreenAppear', 'CartScreenAppear', 'PaymentScreenSuccessful']
for i in range(1, 4):
    current_users = data.loc[data['event'] == stage[i], 'user_id']
    previous_users = data.loc[data['event'] == stage[i-1], 'user_id']
    n_user = current_users[~current_users.isin(previous_users)].nunique()
    print('{} users at the {} stage have never been at the {} stage'.format(n_user, stage[i], stage[i-1]))
111 users at the OffersScreenAppear stage have never been at the MainScreenAppear stage
55 users at the CartScreenAppear stage have never been at the OffersScreenAppear stage
5 users at the PaymentScreenSuccessful stage have never been at the CartScreenAppear stage

As we can see, there are a few users whose paths through the funnel are abnormal.
I suspect this is an echo of the technical data problems noticed above, where older events were logged as newer ones. It should not affect the results of the analysis, since we order the funnel events ourselves.
However, we will share these findings with the developers.

  • Use the event funnel to find the share of users that proceed from each stage to the next. (For instance, for the sequence of events A → B → C, calculate the ratio of users at stage B to the number of users at stage A and the ratio of users at stage C to the number at stage B.)
  • At what stage do you lose the most users?
In [29]:
# drop Tutorial from funnel
funnel_users = event_users[event_users['event']!='Tutorial']
In [30]:
fig = go.Figure(go.Funnel(
    y = funnel_users['event'],
    x = funnel_users['n_users'],
    textinfo = 'percent previous + value',
    marker = {"color": ["rgb(128,177,211)", "rgb(253,180,98)", "rgb(179,222,105)", "rgb(217,217,217)"]}
   ))
fig.update_layout(title={'text':'Events funnel with share of users from the previous stage', 'x':0.5},
                 yaxis=dict(title='Events steps'))
fig.show()

As we can see from the funnel graph, we lose the most users at the OffersScreen stage: only 62% of users proceed to it from the MainScreen. The most successful transition is to the PaymentScreen (95%), which points to a very well-implemented payment step.
These values also give the funnel's drop-off rates (the share of the previous stage minus 1): 0% -> -38% -> -19% -> -5% respectively.
The biggest concern is the transition from the MainScreen to the OffersScreen. To improve the conversion of this step, we can try redesigning the MainScreen and improving the links to the OffersScreen; next, it is worth testing the hypothesis that the new design improves the conversion of this stage.
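The stage-to-stage conversions quoted above can be reproduced directly from the per-stage user counts shown earlier:

```python
# Step-to-step conversion, using the per-stage user counts from the chart above
stages = ['MainScreenAppear', 'OffersScreenAppear',
          'CartScreenAppear', 'PaymentScreenSuccessful']
n_users = [7387, 4561, 3702, 3507]

for prev, curr, prev_n, curr_n in zip(stages, stages[1:], n_users, n_users[1:]):
    conversion = curr_n / prev_n * 100
    print(f'{prev} -> {curr}: {conversion:.0f}%')
```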

  • What share of users make the entire journey from their first event to payment?
In [31]:
# Group data
user_events = data[data['event']!='Tutorial'].groupby('user_id', as_index=False).agg({'event':'nunique'})
user_full_funnel = user_events[user_events['event']==4]
remain_users = user_events[user_events['event']<4]
# plot a graph
fig = go.Figure(data=[go.Pie(
    labels=['The entire journey users', 'The remain users'],
    values=[len(user_full_funnel), len(remain_users)]
)])
fig.update_traces(textinfo='value + percent', hole=.2)
fig.update_layout(legend_title_text='Events', title={'text':'Share of users make the entire journey via a funnel', 'x':0.5})
fig.show()

45% of users made the entire journey through the funnel. This is, in effect, the conversion rate, and it is a great value for CR.

5. Study the results of the experiment.

  • How many users are there in each group?
In [32]:
users_number = data.groupby('group', as_index=False).agg({'user_id':'nunique'})
fig = px.pie(users_number, values='user_id', names='group',
             color_discrete_sequence=px.colors.qualitative.Set3)
fig.update_traces(textinfo='value + percent')
fig.update_layout(legend_title_text='Group', title={'text':'Number of unique users by group', 'x':0.5})
fig.show()

The groups are almost identical in size and proportions.

  • We have two control groups in the A/A test, where we check our mechanisms and calculations. See if there is a statistically significant difference between samples 246 and 247.
  • Select the most popular event. In each of the control groups, find the number of users who performed this action. Find their share. Check whether the difference between the groups is statistically significant. Repeat the procedure for all other events (it will save time if you create a special function for this test). Can you confirm that the groups were split properly?
In [33]:
# Group data
pivot = data.pivot_table(index='event', values='user_id', columns='group', aggfunc='nunique').reset_index()
pivot
Out[33]:
group event 246 247 248
0 CartScreenAppear 1256 1231 1215
1 MainScreenAppear 2440 2469 2478
2 OffersScreenAppear 1532 1513 1516
3 PaymentScreenSuccessful 1190 1151 1166
4 Tutorial 275 283 277

Defining the alpha level

Since we are running multiple tests, we need to correct the alpha level. Let's apply the Holm correction: for each test, the alpha level depends on the total number of tests and on the test's position in the sequence.
In total we will run 20 tests.
Base alpha level: 0.05
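Note that textbook Holm first sorts the p-values and compares each to alpha / (m - rank + 1), while the manual counter used below shrinks alpha in test order. A ready-made implementation is available in statsmodels; a sketch using illustrative p-values (rounded from the A/A results further below):

```python
# Sketch: Holm correction with statsmodels instead of a manual alpha counter.
from statsmodels.stats.multitest import multipletests

p_values = [0.245, 0.761, 0.262, 0.125, 0.843]  # illustrative A/A p-values

# multipletests sorts the p-values internally and applies the Holm step-down
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method='holm')

for p, p_adj, rej in zip(p_values, p_adjusted, reject):
    print(f'p = {p:.3f} -> adjusted {p_adj:.3f}, reject H0: {rej}')
```

With these inputs none of the null hypotheses would be rejected, matching the manual approach.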

In [34]:
# test counter: start at 21 so that the first decrement inside each loop gives m = 20
m = 20 + 1

A/A test

The formulation of hypotheses:

  • Null hypothesis: the shares of users who performed an event are equal between groups 246 and 247.
  • Alternative hypothesis: the shares of users who performed an event differ significantly between groups 246 and 247.
In [35]:
def check_hypothesis(group1, group2, event, alpha):
    alpha = alpha / m  # Holm-style correction of alpha by the test counter
    if len(group1) > 3:  # a combined group label such as '246_247'
        group1_1 = group1[:3]
        group1_2 = group1[-3:]
        successes1 = (pivot[pivot['event']==event][group1_1].iloc[0]) + (pivot[pivot['event']==event][group1_2].iloc[0])
        trials1 = data[data['group']==group1_1]['user_id'].nunique() + data[data['group']==group1_2]['user_id'].nunique()
    else:
        successes1=pivot[pivot['event']==event][group1].iloc[0]
        trials1=data[data['group']==group1]['user_id'].nunique()
    
    successes2=pivot[pivot['event']==event][group2].iloc[0]
    trials2=data[data['group']==group2]['user_id'].nunique()
    
    # proportion of successes in the first group
    p1 = successes1/trials1

    # proportion of successes in the second group
    p2 = successes2/trials2

    # proportion in a combined dataset
    p_combined = (successes1 + successes2) / (trials1 + trials2)

    difference = p1 - p2
    z_value = difference / math.sqrt(p_combined * (1 - p_combined) * (1/trials1 + 1/trials2))
    distr = st.norm(0, 1) 
    p_value = (1 - distr.cdf(abs(z_value))) * 2
    
    print ('The corrected alpha: ', alpha)
    print('p-value: ', p_value)
    if (p_value < alpha):
        print("We reject the null hypothesis for",event, 'for groups',group1,'and',group2)
    else:
        print("We can't reject the null hypothesis for", event,'for groups',group1,'and',group2)
   
In [36]:
for i in pivot['event'].unique():
    m = m - 1  # decrement the test counter
    check_hypothesis('246', '247', i, alpha=0.05)
    
The corrected alpha:  0.0025
p-value:  0.24545492108658662
We can't reject the null hypothesis for CartScreenAppear for groups 246 and 247
The corrected alpha:  0.002631578947368421
p-value:  0.7610729163957668
We can't reject the null hypothesis for MainScreenAppear for groups 246 and 247
The corrected alpha:  0.002777777777777778
p-value:  0.26218736176825974
We can't reject the null hypothesis for OffersScreenAppear for groups 246 and 247
The corrected alpha:  0.0029411764705882353
p-value:  0.1249307201728882
We can't reject the null hypothesis for PaymentScreenSuccessful for groups 246 and 247
The corrected alpha:  0.003125
p-value:  0.8427918367072245
We can't reject the null hypothesis for Tutorial for groups 246 and 247

As we can see, the A/A test was successful: the groups are statistically equal for all events.

  • Do the same thing for the group with altered fonts. Compare the results with those of each of the control groups for each event in isolation. Compare the results with the combined results for the control groups. What conclusions can you draw from the experiment?

A/B test #1

The formulation of hypotheses:

  • Null hypothesis: the shares of users who performed an event are equal between groups 246 and 248.
  • Alternative hypothesis: the shares of users who performed an event differ significantly between groups 246 and 248.
In [37]:
for i in pivot['event'].unique():
    m = m - 1  # decrement the test counter
    check_hypothesis('246', '248', i, alpha=0.05)
The corrected alpha:  0.0033333333333333335
p-value:  0.06694174479858339
We can't reject the null hypothesis for CartScreenAppear for groups 246 and 248
The corrected alpha:  0.0035714285714285718
p-value:  0.29108407869663555
We can't reject the null hypothesis for MainScreenAppear for groups 246 and 248
The corrected alpha:  0.0038461538461538464
p-value:  0.1889674821179712
We can't reject the null hypothesis for OffersScreenAppear for groups 246 and 248
The corrected alpha:  0.004166666666666667
p-value:  0.18624034844132686
We can't reject the null hypothesis for PaymentScreenSuccessful for groups 246 and 248
The corrected alpha:  0.004545454545454546
p-value:  0.881484521728291
We can't reject the null hypothesis for Tutorial for groups 246 and 248

As we can see, A/B test #1 found no significant differences: the groups are equal for all events.

A/B test #2

The formulation of hypotheses:

  • Null hypothesis: the shares of users who performed an event are equal between groups 247 and 248.
  • Alternative hypothesis: the shares of users who performed an event differ significantly between groups 247 and 248.
In [38]:
for i in pivot['event'].unique():
    m = m - 1  # decrement the test counter
    check_hypothesis('247', '248', i, alpha=0.05)
The corrected alpha:  0.005
p-value:  0.5021725207005301
We can't reject the null hypothesis for CartScreenAppear for groups 247 and 248
The corrected alpha:  0.005555555555555556
p-value:  0.45009892631077086
We can't reject the null hypothesis for MainScreenAppear for groups 247 and 248
The corrected alpha:  0.00625
p-value:  0.8482836962001166
We can't reject the null hypothesis for OffersScreenAppear for groups 247 and 248
The corrected alpha:  0.0071428571428571435
p-value:  0.8291560168561394
We can't reject the null hypothesis for PaymentScreenSuccessful for groups 247 and 248
The corrected alpha:  0.008333333333333333
p-value:  0.7272001803840036
We can't reject the null hypothesis for Tutorial for groups 247 and 248

As we can see, A/B test #2 found no significant differences: the groups are equal for all events.

A/B test #3

The formulation of hypotheses:

  • Null hypothesis: the shares of users who performed an event are equal between the combined control group (246 + 247) and group 248.
  • Alternative hypothesis: the shares of users who performed an event differ significantly between the combined control group (246 + 247) and group 248.
In [39]:
for i in pivot['event'].unique():
    m = m - 1  # decrement the test counter
    check_hypothesis('246_247', '248', i, alpha=0.05)
The corrected alpha:  0.01
p-value:  0.14890240608127048
We can't reject the null hypothesis for CartScreenAppear for groups 246_247 and 248
The corrected alpha:  0.0125
p-value:  0.28814344245727863
We can't reject the null hypothesis for MainScreenAppear for groups 246_247 and 248
The corrected alpha:  0.016666666666666666
p-value:  0.38636546810066363
We can't reject the null hypothesis for OffersScreenAppear for groups 246_247 and 248
The corrected alpha:  0.025
p-value:  0.5251304984188463
We can't reject the null hypothesis for PaymentScreenSuccessful for groups 246_247 and 248
The corrected alpha:  0.05
p-value:  0.7732477113992795
We can't reject the null hypothesis for Tutorial for groups 246_247 and 248

As we can see, the combined A/B test #3 found no significant differences: the groups are equal for all events.

  • What significance level have you set to test the statistical hypotheses mentioned above? Calculate how many statistical hypothesis tests you carried out. With a statistical significance level of 0.1, one in 10 results could be false. What should the significance level be? If you want to change it, run through the previous steps again and check your conclusions.

During testing we applied an alpha correction using the Holm method, so the likelihood of multiple-testing errors was taken into account.
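As a sanity check on that choice, simple arithmetic shows what uncorrected testing would risk across our 20 tests:

```python
# Sketch: expected false positives across 20 tests at different uncorrected
# significance levels, and the chance of at least one (independent tests).
n_tests = 20

for alpha in (0.1, 0.05, 0.01):
    expected_fp = n_tests * alpha              # expected false positives
    fwer = 1 - (1 - alpha) ** n_tests          # P(at least one false positive)
    print(f'alpha={alpha}: ~{expected_fp:.1f} false positives expected, '
          f'P(at least one) = {fwer:.2f}')
```

At an uncorrected alpha of 0.1 we would expect about two spurious rejections among 20 tests, which is why the correction matters.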

6. Conclusions and recommendations

The project had the following goals:

  • Study the sales funnel.
  • Give recommendations on applying a new font to the app's design, based on the results of A/A/B testing.

We successfully built and studied the sales funnel and ran A/A/B tests for applying a new font to the app's design.

Conclusions:

  • The quality of the user activity data is sufficient for analysis in all groups.
  • There are some (non-critical) problems with the formation of the logs, where the time and date of events get mixed up.
  • There is an assumption that user activity decreases on weekends.
  • The funnel shows excellent conversion (45%) and stage-to-stage drop-off rates ranging from -5% to -38%.
  • The results of the A/A/B test do not support the hypothesis that the new font is useful for the application design.

Recommendations:

  • Do not adopt the new font.
  • Plan to test hypotheses about the influence of the day of the week on user activity.
  • Figure out how to avoid the technical problems with event times and dates.
  • Plan to update the MainScreen design and test hypotheses about improving its conversion.